Foundational Models in Medical Imaging: A Comprehensive Survey and Future Vision
Foundation models, that is, large-scale pre-trained deep-learning models that
can be adapted to a wide range of downstream tasks, have recently gained
significant interest, and various deep-learning problems are undergoing a
paradigm shift with the rise of these models. Trained on large-scale datasets
to bridge the gap between different modalities, foundation models facilitate
contextual reasoning, generalization, and prompting capabilities at test time.
The predictions of these
models can be adjusted for new tasks by augmenting the model input with
task-specific hints, called prompts, without requiring extensive labeled data
or retraining. Capitalizing on the advances in computer vision, the medical
imaging community has also shown growing interest in these models. To assist
researchers in
navigating this direction, this survey intends to provide a comprehensive
overview of foundation models in the domain of medical imaging. Specifically,
we initiate our exploration by providing an exposition of the fundamental
concepts forming the basis of foundation models. Subsequently, we offer a
methodical taxonomy of foundation models within the medical domain, proposing a
classification system primarily structured around training strategies, while
also incorporating additional facets such as application domains, imaging
modalities, specific organs of interest, and the algorithms integral to these
models. Furthermore, we emphasize the practical use case of some selected
approaches and then discuss the opportunities, applications, and future
directions of these large-scale, pre-trained models for analyzing medical
images. In the same vein, we address the prevailing challenges and research
pathways associated with foundational models in medical imaging. These
encompass the areas of interpretability, data management, computational
requirements, and the nuanced issue of contextual comprehension.
Comment: The paper is currently being prepared for submission to MI
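The prompt-based adaptation described above, where a frozen pre-trained model is steered by task-specific hints in its input rather than by retraining, can be illustrated with a deliberately toy sketch. The "model", keywords, and tasks below are invented stand-ins for illustration, not an actual foundation model:

```python
# Toy sketch of prompt-based adaptation: the model's weights never change;
# only the input is augmented with a task-specific hint (the prompt).

def frozen_model(tokens):
    """Toy 'pretrained' scorer: votes by counting known keywords."""
    keywords = {"tumor": "oncology", "fracture": "radiology"}
    votes = [keywords[t] for t in tokens if t in keywords]
    return max(set(votes), key=votes.count) if votes else "unknown"

def predict_with_prompt(prompt_tokens, input_tokens):
    # The prompt is simply prepended to the input; no retraining occurs.
    return frozen_model(prompt_tokens + input_tokens)

# The same frozen model is adapted to two "tasks" via different prompts.
print(predict_with_prompt(["tumor"], ["scan"]))      # oncology
print(predict_with_prompt(["fracture"], ["scan"]))   # radiology
```

The design point is that adaptation lives entirely in the input: swapping the prompt re-targets the model without touching its parameters, which is the property that removes the need for extensive labeled data and retraining.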
Unlocking Fine-Grained Details with Wavelet-based High-Frequency Enhancement in Transformers
Medical image segmentation is a critical task that plays a vital role in
diagnosis, treatment planning, and disease monitoring. Accurate segmentation of
anatomical structures and abnormalities from medical images can aid in the
early detection and treatment of various diseases. In this paper, we address
the local feature deficiency of the Transformer model by carefully re-designing
the self-attention map to produce accurate dense prediction in medical images.
To this end, we first apply the wavelet transformation to decompose the input
feature map into low-frequency (LF) and high-frequency (HF) subbands. The LF
segment is associated with coarse-grained features while the HF components
preserve fine-grained features such as texture and edge information. Next, we
reformulate the self-attention operation using the efficient Transformer to
perform both spatial and context attention on top of the frequency
representation. Furthermore, to intensify the importance of the boundary
information, we impose an additional attention map by creating a Gaussian
pyramid on top of the HF components. Moreover, we propose a multi-scale context
enhancement block within skip connections to adaptively model inter-scale
dependencies to overcome the semantic gap among stages of the encoder and
decoder modules. Throughout comprehensive experiments, we demonstrate the
effectiveness of our strategy on multi-organ and skin lesion segmentation
benchmarks. The implementation code will be available upon acceptance.
\href{https://github.com/mindflow-institue/WaveFormer}{GitHub}.Comment: Accepted in MICCAI 2023 workshop MLM
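The frequency split this paper builds on, decomposing a feature map into a low-frequency subband (coarse structure) and high-frequency subbands (texture and edges), can be sketched with a one-level 2D Haar transform. This is a minimal NumPy version under simplifying assumptions (even-sized single-channel input), not the paper's implementation; a real pipeline would use a wavelet library and apply the transform per channel:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2D Haar wavelet transform.
    Splits a 2D feature map into a low-frequency subband (LL) and
    three high-frequency subbands (LH, HL, HH)."""
    # Pairwise averages/differences along rows (horizontal filtering).
    a = (x[:, 0::2] + x[:, 1::2]) / 2.0   # row low-pass
    d = (x[:, 0::2] - x[:, 1::2]) / 2.0   # row high-pass
    # Then along columns (vertical filtering).
    ll = (a[0::2, :] + a[1::2, :]) / 2.0  # coarse-grained structure
    lh = (a[0::2, :] - a[1::2, :]) / 2.0  # horizontal detail
    hl = (d[0::2, :] + d[1::2, :]) / 2.0  # vertical detail
    hh = (d[0::2, :] - d[1::2, :]) / 2.0  # diagonal detail
    return ll, lh, hl, hh

x = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(x)
print(ll.shape)  # (2, 2): each subband has half the resolution per axis
```

In the paper's setting, attention would then operate on these subbands separately, with the HF components carrying the edge and texture cues that boundary-sensitive segmentation needs.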
Medical Image Segmentation Review: The success of U-Net
Automatic medical image segmentation is a crucial topic in the medical domain
and, consequently, a critical component of the computer-aided diagnosis
paradigm. U-Net is the most widespread image segmentation architecture due to
its flexibility, optimized modular design, and success in all medical image
modalities. Over the years, the U-Net model has attracted tremendous attention
from academic and industrial researchers. Several extensions of this network
have been proposed to address the scale and complexity of medical imaging
tasks.
Understanding the deficiencies of the naive U-Net model is the first step for
practitioners seeking the proper U-Net variant for their application. A
compendium of the different variants in one place makes it easier for builders
to identify the relevant research, and it helps ML researchers understand the
challenges that biological tasks pose to the model. To
address this, we discuss the practical aspects of the U-Net model and suggest a
taxonomy to categorize each network variant. Moreover, to measure the
performance of these strategies in a clinical application, we propose fair
evaluations of some unique and famous designs on well-known datasets. We
provide a comprehensive implementation library with trained models for future
research. In addition, for ease of future studies, we created an online list of
U-Net papers with their official implementations where available. All
information is gathered in the https://github.com/NITR098/Awesome-U-Net
repository.
Comment: Submitted to the IEEE Transactions on Pattern Analysis and Machine Intelligence Journal
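The plain U-Net design that the review's taxonomy starts from, an encoder that halves spatial resolution while doubling channels, mirrored by a decoder whose skip connections concatenate the matching encoder feature map, can be summarized by tracing feature-map shapes. The sizes below are conventional defaults for illustration, not a prescription from the review:

```python
def unet_shapes(input_size=256, depth=4, base_channels=64):
    """Trace (spatial size, channels) through a plain U-Net.
    The encoder halves resolution and doubles channels per stage;
    the decoder mirrors this, and each skip connection concatenates
    the matching encoder feature map before the decoder convolutions."""
    encoder = []
    size, ch = input_size, base_channels
    for _ in range(depth):
        encoder.append((size, ch))      # feature map saved for the skip
        size //= 2                      # 2x2 max-pooling
        ch *= 2                         # channels double after pooling
    bottleneck = (size, ch)
    decoder = []
    for skip_size, skip_ch in reversed(encoder):
        size *= 2                       # transposed-conv upsampling
        ch //= 2
        # Concatenating the skip doubles the channel count pre-conv:
        # (size, channels after concat, channels after decoder convs)
        decoder.append((size, skip_ch + ch, ch))
    return encoder, bottleneck, decoder

enc, bott, dec = unet_shapes()
print(bott)  # (16, 1024): the coarsest, most semantic representation
```

The shape trace makes the "semantic gap" concrete: a skip connection joins an early, high-resolution encoder map with a deep, heavily processed decoder map of the same size, which is exactly the mismatch many U-Net variants catalogued in the review try to bridge.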